
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths. The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges: In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform: The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers should instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t use XHTML’s simple extensibility in a similar way for their own ends (a small illustrative fragment appears at the end of these annotations).
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, simply a disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper (the standard container layout is sketched at the end of these annotations).
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form online into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—simply a disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it. (The same clean-up is sketched as an XSLT pass at the end of these annotations.)
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work. (A skeleton of such a transformation, along with the xsltproc invocation, is sketched at the end of these annotations.)
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
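
To make the class-attribute extensibility mentioned above concrete, here is a minimal XHTML fragment. The class names (chapter, epigraph, pull-quote) are hypothetical, publisher-defined conventions rather than part of any standard; the point is that the markup stays plain XHTML while carrying extra semantics that a stylesheet or transformation can act on.

```xml
<!-- Hypothetical publisher conventions layered onto plain XHTML via class. -->
<div class="chapter" id="chapter-03">
  <h1>Chapter Three</h1>
  <!-- Still an ordinary paragraph to any browser; "epigraph" is a house convention. -->
  <p class="epigraph">An opening quotation set apart from the body text.</p>
  <p>Ordinary body text needs no special treatment at all.</p>
  <p class="pull-quote">A sentence flagged for reuse as a pull quote in print.</p>
</div>
```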
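The "ePub book is a Web page in a wrapper" observation can be sketched from the standard container layout the format uses: a zip archive whose META-INF/container.xml points at a package file, which in turn lists the XHTML content. Apart from mimetype and META-INF/container.xml, the file and directory names below are conventional placeholders, not requirements of the specification.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Typical contents of the zip archive (shown here as a comment):
       mimetype                  (the literal string application/epub+zip)
       META-INF/container.xml    (locates the package file below)
       OEBPS/content.opf         (metadata, manifest, and reading order)
       OEBPS/chapter01.xhtml     (the book's content: plain XHTML)
       OEBPS/styles.css
-->
<!-- META-INF/container.xml: the fixed entry point a reading system relies on. -->
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
```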
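The list clean-up described in the annotations (paragraphs that merely carry a list-item class becoming real XHTML lists) was done by the authors with search-and-replace; as an illustration, the same fix can be expressed as a small XSLT 1.0 pass. The class name bullet-item is an assumption about what the export emits, not documented InDesign behaviour, and would need to be adjusted to match the actual output.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: rewrite consecutive <p class="bullet-item"> elements as one <ul>.
     "bullet-item" is an assumed class name; everything else is copied through. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml"
    xmlns:x="http://www.w3.org/1999/xhtml"
    exclude-result-prefixes="x">

  <!-- Identity transform: copy anything not handled by a more specific rule. -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>

  <!-- The first item of a run opens the list and walks the run in "item" mode. -->
  <xsl:template match="x:p[@class='bullet-item']
      [not(preceding-sibling::*[1][self::x:p][@class='bullet-item'])]">
    <ul>
      <xsl:apply-templates select="." mode="item"/>
    </ul>
  </xsl:template>

  <!-- Emit one <li>, then continue only while the next sibling is still an item. -->
  <xsl:template match="x:p" mode="item">
    <li><xsl:apply-templates select="node()"/></li>
    <xsl:apply-templates mode="item"
        select="following-sibling::*[1][self::x:p][@class='bullet-item']"/>
  </xsl:template>

  <!-- Items after the first are emitted by the run above, so suppress them here. -->
  <xsl:template match="x:p[@class='bullet-item']
      [preceding-sibling::*[1][self::x:p][@class='bullet-item']]"/>

</xsl:stylesheet>
```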
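Finally, a skeleton of the XHTML-to-InCopy transformation step, showing its overall shape and how it would be driven from xsltproc. This is a sketch reconstructed for illustration, not the authors' actual script [18]: the ICML document wrapper (the <?aid ?> processing instruction, root element, and style definitions that InDesign expects) is omitted, the element names follow the general ParagraphStyleRange/CharacterStyleRange/Content pattern of Adobe's IDML documentation as best recalled, and the applied style names are assumed house styles. A real stylesheet should be checked against a file exported by InDesign itself.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch only: map XHTML headings and paragraphs onto ICML-style ranges.
     Inline character formatting is flattened to plain text here, and the
     ICML document wrapper is deliberately omitted (see the note above). -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:x="http://www.w3.org/1999/xhtml"
    exclude-result-prefixes="x">

  <xsl:output method="xml" indent="yes"/>

  <xsl:template match="/">
    <Story>
      <!-- Only headings and paragraphs are handled in this sketch. -->
      <xsl:apply-templates select="//x:body//x:h1 | //x:body//x:p"/>
    </Story>
  </xsl:template>

  <!-- Chapter headings: assumed house style name "Heading 1". -->
  <xsl:template match="x:h1">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Heading 1">
      <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
        <Content><xsl:value-of select="normalize-space(.)"/></Content>
        <Br/>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>

  <!-- Body paragraphs: assumed house style name "Body". -->
  <xsl:template match="x:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
        <Content><xsl:value-of select="normalize-space(.)"/></Content>
        <Br/>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>

</xsl:stylesheet>
<!-- Run with the processor named in the article (file names are placeholders):
       xsltproc xhtml-to-icml.xsl chapter01.xhtml > chapter01.icml
     The resulting .icml story can then be placed into an InDesign template. -->
```
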
Gonzalo San Gil, PhD.

Libraries - The Open Source Discovery Service - 0 views

  •  
    "Libraries.io monitors 834,262 open source libraries across 29 different package managers. You can discover new libraries to use in your software projects as well as be notified of new releases to keep your applications secure and up to date. "
Paul Merrell

Open Access Can't Wait. Pass FASTR Now. | Electronic Frontier Foundation - 1 views

  • When you pay for federally funded research, you should be allowed to read it. That’s the idea behind the Fair Access to Science and Technology Research Act (S.1701, H.R.3427), which was recently reintroduced in both houses of Congress. FASTR was first introduced in 2013, and while it has strong support in both parties, it has never gained enough momentum to pass. We need to change that. Let’s tell Congress that passing an open access law should be a top priority.
  • Tell Congress: It’s time to move FASTR. The proposal is pretty simple: Under FASTR, every federal agency that spends more than $100 million on grants for research would be required to adopt an open access policy. The bill gives each agency flexibility to implement an open access policy suited to the work it funds, so long as research is available to the public after an “embargo period” of a year or less. One of the major points of contention around FASTR is how long that embargo period should be. Last year, the Senate Homeland Security and Governmental Affairs Committee approved FASTR unanimously, but only after extending that embargo period from six months to 12, putting FASTR in line with the 2013 White House open access memo. That’s the version that was recently reintroduced in the Senate.  The House bill, by contrast, sets the embargo period at six months. EFF supports a shorter period. Part of what’s important about open access is that it democratizes knowledge: when research is available to the public, you don’t need expensive journal subscriptions or paid access to academic databases in order to read it. A citizen scientist can use and build on the same body of knowledge as someone with institutional connections. But in the fast-moving world of scientific research, 12 months is an eternity. A shorter embargo is far from a radical proposition, especially in 2017. The landscape for academic publishing is very different from what it was when FASTR was first introduced, thanks in large part to nongovernmental funders who already enforce open access mandates. Major foundations like Ford, Gates, and Hewlett have adopted strong open access policies requiring that research be not only available to the public, but also licensed to allow republishing and reuse by anyone.
  • Just last year, the Gates Foundation made headlines when it dropped the embargo period from its policy entirely, requiring that research be published openly immediately. After a brief standoff, major publishers began to accommodate Gates’ requirements. As a result, we finally have public confirmation of what we’ve always known: open access mandates don’t put publishers out of business; they push them to modernize their business models. Imagine how a strong open access mandate for government-funded research—with a requirement that that research be licensed openly—could transform publishing. FASTR may not be that law, but it’s a huge step in the right direction, and it’s the best option on the table today. Let’s urge Congress to pass a version of FASTR with an embargo period of six months or less, and then use it as a foundation for stronger open access in the future.
Gary Edwards

Brendan's Roadmap Updates: Open letter to Microsoft's Chris Wilson and their fight to s... - 0 views

  • The history of ECMAScript since its beginnings in November 1996 shows that when Microsoft was behind in the market (against Netscape in 1996-1997), it moved aggressively in the standards body to evolve standards starting with ES1 through ES3. Once Microsoft dominated the market, the last edition of the standard was left to rot -- ES3 was finished in 1999 -- and even easy-to-fix standards conformance bugs in IE JScript went unfixed for eight years (so three years to go from Edition 1 to 3, then over eight to approach Edition 4). Now that the proposed 4th edition looks like a competitive threat, the world suddenly hears in detail about all those bugs, spun as differences afflicting "JavaScript" that should inform a new standard.
  • In my opinion, the notion that we need to add features so that ajax programming would be easier is plain wrong. Ajax is a hack and also the notion of a webapp is a hack. The web was created in a document-centric view. All W3C standards are also based on the same document notion. The heart of the web, the HTTP protocol, is designed to support a web of documents and as such is stateless. The proper solution, IMO, is not to evolve ES for the benefit of ajax and webapps, but rather generalize the notion of a document browser that connects to a web of documents to a general purpose client engine that connects to a network of internet applications. Thus the current web (document) browser just becomes one such internet application.
  •  
    the obvious conflict of interest between the standards-based web and proprietary platforms advanced by Microsoft, and the rationales for keeping the web's client-side programming language small while the proprietary platforms rapidly evolve support for large languages, does not help maintain the fiction that only clashing high-level philosophies are involved here. Readers may not know that Ecma has no provision for "minor releases" of its standards, so any ES3.1 that was approved by TG1 would inevitably be given a whole edition number, presumably becoming the 4th Edition of ECMAScript. This is obviously contentious given all the years that the majority of TG1, sometimes even apparently including Microsoft representatives, has worked on ES4, and the developer expectations set by this long-standing effort. A history of Microsoft's post-ES3 involvement in the ECMAScript standard group, leading up to the overt split in TG1 in March, is summarized here. The history of ECMAScript since its beginnings in November 1996 shows that when Microsoft was behind in the market (against Netscape in 1996-1997), it moved aggressively in the standards body to evolve standards starting with ES1 through ES3. Once Microsoft dominated the market, the last edition of the standard was left to rot -- ES3 was finished in 1999 -- and even easy-to-fix standards conformance bugs in IE JScript went unfixed for eight years (so three years to go from Edition 1 to 3, then over eight to approach Edition 4). Now that the proposed 4th edition looks like a competitive threat, the world suddenly hears in detail about all those bugs, spun as differences afflicting "JavaScript" that should inform a new standard.
Paul Merrell

ISPs say the "massive cost" of Snooper's Charter will push up UK broadband bills | Ars ... - 0 views

  • How much extra will you have to pay for the privilege of being spied on?
  • UK ISPs have warned MPs that the costs of implementing the Investigatory Powers Bill (aka the Snooper's Charter) will be much greater than the £175 million the UK government has allotted for the task, and that broadband bills will need to rise as a result. Representatives from ISPs and software companies told the House of Commons Science and Technology Committee that the legislation greatly underestimates the "sheer quantity" of data generated by Internet users these days. They also pointed out that distinguishing content from metadata is a far harder task than the government seems to assume. Matthew Hare, the chief executive of ISP Gigaclear, said with "a typical 1 gigabit connection to someone's home, over 50 terabytes of data per year [are] passing over it. If you say that a proportion of that is going to be the communications data—the record of who you communicate with, when you communicate or what you communicate—there would be the most massive and enormous amount of data that in future an access provider would be expected to keep. The indiscriminate collection of mass data across effectively every user of the Internet in this country is going to have a massive cost."
  • Moreover, the larger the cache of stored data, the more worthwhile it will be for criminals and state-backed actors to gain access and download that highly-revealing personal information for fraud and blackmail. John Shaw, the vice president of product management at British security firm Sophos, told the MPs: "There would be a huge amount of very sensitive personal data that could be used by bad guys.
  • The ISPs also challenged the government's breezy assumption that separating the data from the (equally revealing) metadata would be simple, not least because an Internet connection is typically being used for multiple services simultaneously, with data packets mixed together in a completely contingent way. Hare described a typical usage scenario for a teenager on their computer at home, where they are playing a game communicating with their friends using Steam; they are broadcasting the game using Twitch; and they may also be making a voice call at the same time too. "All those applications are running simultaneously," Hare said. "They are different applications using different servers with different services and different protocols. They are all running concurrently on that one machine." Even accessing a Web page is much more complicated than the government seems to believe, Hare pointed out. "As a webpage is loading, you will see that that webpage is made up of tens, or many tens, of individual sessions that have been created across the Internet just to load a single webpage. Bluntly, if you want to find out what someone is doing you need to be tracking all of that data all the time."
  • Hare raised another major issue. "If I was a software business ... I would be very worried that my customers would not buy my software any more if it had anything to do with security at all. I would be worried that a backdoor was built into the software by the [Investigatory Powers] Bill that would allow the UK government to find out what information was on that system at any point they wanted in the future." As Ars reported last week, the ability to demand that backdoors are added to systems, and a legal requirement not to reveal that fact under any circumstances, are two of the most contentious aspects of the new Investigatory Powers Bill. The latest comments from industry experts add to concerns that the latest version of the Snooper's Charter would inflict great harm on civil liberties in the UK, and also make security research well-nigh impossible here. To those fears can now be added undermining the UK software industry, as well as forcing the UK public to pay for the privilege of having their ISP carry out suspicionless surveillance.
Paul Merrell

Surveillance scandal rips through hacker community | Security & Privacy - CNET News - 0 views

  • One security start-up that had an encounter with the FBI was Wickr, a privacy-forward text messaging app for the iPhone with an Android version in private beta. Wickr's co-founder Nico Sell told CNET at Defcon, "Wickr has been approached by the FBI and asked for a backdoor. We said, 'No.'" The mistrust runs deep. "Even if [the NSA] stood up tomorrow and said that [they] have eliminated these programs," said Marlinspike, "How could we believe them? How can we believe that anything they say is true?" Where does security innovation go next? The immediate future of information security innovation most likely lies in software that provides an existing service but with heightened privacy protections, such as webmail that doesn't mine you for personal data.
  • Wickr's Sell thinks that her company has hit upon a privacy innovation that a few others are also doing, but many will soon follow: the company itself doesn't store user data. "[The FBI] would have to force us to build a new app. With the current app there's no way," she said, that they could incorporate backdoor access to Wickr users' texts or metadata. "Even if you trust the NSA 100 percent that they're going to use [your data] correctly," Sell said, "Do you trust that they're going to be able to keep it safe from hackers? What if somebody gets that database and posts it online?" To that end, she said, people will start seeing privacy innovation for services that don't currently provide it. Calling it "social networks 2.0," she said that social network competitors will arise that do a better job of protecting their customer's privacy and predicted that some that succeed will do so because of their emphasis on privacy. Abine's recent MaskMe browser add-on and mobile app for creating disposable e-mail addresses, phone numbers, and credit cards is another example of a service that doesn't have access to its own users' data.
  • Stamos predicted changes in services that companies with cloud storage offer, including offering customers the ability to store their data outside of the U.S. "If they want to stay competitive, they're going to have to," he said. But, he cautioned, "It's impossible to do a cloud-based ad supported service." Soghoian added, "The only way to keep a service running is to pay them money." This, he said, is going to give rise to a new wave of ad-free, privacy protective subscription services.
  • The issue with balancing privacy and surveillance is that the wireless carriers are not interested in privacy, he said. "They've been providing wiretapping for 100 years. Apple may in the next year protect voice calls," he said, and said that the best hope for ending widespread government surveillance will be the makers of mobile operating systems like Apple and Google. Not all upcoming security innovation will be focused on that kind of privacy protection. Security researcher Brandon Wiley showed off at Defcon a protocol he calls Dust that can obfuscate different kinds of network traffic, with the end goal of preventing censorship. "I only make products about letting you say what you want to say anywhere in the world," such as content critical of governments, he said. Encryption can hide the specifics of the traffic, but some governments have figured out that they can simply block all encrypted traffic, he said. The Dust protocol would change that, he said, making it hard to tell the difference between encrypted and unencrypted traffic. It's hard to build encryption into pre-existing products, Wiley said. "I think people are going to make easy-to-use, encrypted apps, and that's going to be the future."
  • Companies could face severe consequences from their security experts, said Stamos, if the in-house experts find out that they've been lied to about providing government access to customer data. You could see "lots of resignations and maybe publicly," he said. "It wouldn't hurt their reputations to go out in a blaze of glory." Perhaps not surprisingly, Marlinspike sounded a hopeful call for non-destructive activism on Defcon's 21st anniversary. "As hackers, we don't have a lot of influence on policy. I hope that's something that we can focus our energy on," he said.
  •  
    NSA as the cause of the next major disruption in the social networking service industry?  Grief ahead for Google? Note the point made that: "It's impossible to do a cloud-based ad supported service" where the encryption/decryption takes place on the client side. 
Paul Merrell

US pushing local cops to stay mum on surveillance - Yahoo News - 0 views

  • WASHINGTON (AP) -- The Obama administration has been quietly advising local police not to disclose details about surveillance technology they are using to sweep up basic cellphone data from entire neighborhoods, The Associated Press has learned. Citing security reasons, the U.S. has intervened in routine state public records cases and criminal trials regarding use of the technology. This has resulted in police departments withholding materials or heavily censoring documents in rare instances when they disclose any about the purchase and use of such powerful surveillance equipment. Federal involvement in local open records proceedings is unusual. It comes at a time when President Barack Obama has said he welcomes a debate on government surveillance and called for more transparency about spying in the wake of disclosures about classified federal surveillance programs.
  • One well-known type of this surveillance equipment is known as a Stingray, an innovative way for law enforcement to track cellphones used by suspects and gather evidence. The equipment tricks cellphones into identifying some of their owners' account information, like a unique subscriber number, and transmitting data to police as if it were a phone company's tower. That allows police to obtain cellphone information without having to ask for help from service providers, such as Verizon or AT&T, and can locate a phone without the user even making a call or sending a text message. But without more details about how the technology works and under what circumstances it's used, it's unclear whether the technology might violate a person's constitutional rights or whether it's a good investment of taxpayer dollars. Interviews, court records and public-records requests show the Obama administration is asking agencies to withhold common information about the equipment, such as how the technology is used and how to turn it on. That pushback has come in the form of FBI affidavits and consultation in local criminal cases.
  • "These extreme secrecy efforts are in relation to very controversial, local government surveillance practices using highly invasive technology," said Nathan Freed Wessler, a staff attorney with the American Civil Liberties Union, which has fought for the release of these types of records. "If public participation means anything, people should have the facts about what the government is doing to them." Harris Corp., a key manufacturer of this equipment, built a secrecy element into its authorization agreement with the Federal Communications Commission in 2011. That authorization has an unusual requirement: that local law enforcement "coordinate with the FBI the acquisition and use of the equipment." Companies like Harris need FCC authorization in order to sell wireless equipment that could interfere with radio frequencies. A spokesman from Harris Corp. said the company will not discuss its products for the Defense Department and law enforcement agencies, although public filings showed government sales of communications systems such as the Stingray accounted for nearly one-third of its $5 billion in revenue. "As a government contractor, our solutions are regulated and their use is restricted," spokesman Jim Burke said.
  • Local police agencies have been denying access to records about this surveillance equipment under state public records laws. Agencies in San Diego, Chicago and Oakland County, Michigan, for instance, declined to tell the AP what devices they purchased, how much they cost and with whom they shared information. San Diego police released a heavily censored purchasing document. Oakland officials said police-secrecy exemptions and attorney-client privilege keep their hands tied. It was unclear whether the Obama administration interfered in the AP requests. "It's troubling to think the FBI can just trump the state's open records law," said Ginger McCall, director of the open government project at the Electronic Privacy Information Center. McCall suspects the surveillance would not pass constitutional muster. "The vast amount of information it sweeps in is totally irrelevant to the investigation," she said.
  • A court case challenging the public release of information from the Tucson Police Department includes an affidavit from an FBI special agent, Bradley Morrison, who said the disclosure would "result in the FBI's inability to protect the public from terrorism and other criminal activity because through public disclosures, this technology has been rendered essentially useless for future investigations." Morrison said revealing any information about the technology would violate a federal homeland security law about information-sharing and arms-control laws — legal arguments that outside lawyers and transparency experts said are specious and don't comport with court cases on the U.S. Freedom of Information Act. The FBI did not answer questions about its role in states' open records proceedings.
  • But a former Justice Department official said the federal government should be making this argument in federal court, not a state level where different public records laws apply. "The federal government appears to be attempting to assert a federal interest in the information being sought, but it's going about it the wrong way," said Dan Metcalfe, the former director of the Justice Department's office of information and privacy. Currently Metcalfe is the executive director of American University's law school Collaboration on Government Secrecy project. A criminal case in Tallahassee cites the same homeland security laws in Morrison's affidavit, court records show, and prosecutors told the court they consulted with the FBI to keep portions of a transcript sealed. That transcript, released earlier this month, revealed that Stingrays "force" cellphones to register their location and identifying information with the police device and enables officers to track calls whenever the phone is on.
  • One law enforcement official familiar with the Tucson lawsuit, who spoke on condition of anonymity because the official was not authorized to speak about internal discussions, said federal lawyers told Tucson police they couldn't hand over a PowerPoint presentation made by local officers about how to operate the Stingray device. Federal officials forwarded Morrison's affidavit for use in the Tucson police department's reply to the lawsuit, rather than requesting the case be moved to federal court. In Sarasota, Florida, the U.S. Marshals Service confiscated local records on the use of the surveillance equipment, removing the documents from the reach of Florida's expansive open-records law after the ACLU asked under Florida law to see the documents. The ACLU has asked a judge to intervene. The Marshals Service said it deputized the officer as a federal agent and therefore the records weren't accessible under Florida law.
  •  
    The Florida case is particularly interesting because Florida is within the jurisdiction of the U.S. Eleventh Circuit Court of Appeals, which has just ruled that law enforcement must obtain a search warrant from a court before using equipment to determine a cell phone's location.  
Paul Merrell

Tell Congress: My Phone Calls are My Business. Reform the NSA. | EFF Action Center - 3 views

  • The USA PATRIOT Act granted the government powerful new spying capabilities that have grown out of control—but the provision that the FBI and NSA have been using to collect the phone records of millions of innocent people expires on June 1. Tell Congress: it’s time to rethink out-of-control spying. A vote to reauthorize Section 215 is a vote against the Constitution.
  • On June 5, 2013, the Guardian published a secret court order showing that the NSA has interpreted Section 215 to mean that, with the help of the FBI, it can collect the private calling records of millions of innocent people. The government could even try to use Section 215 for bulk collection of financial records. The NSA’s defenders argue that invading our privacy is the only way to keep us safe. But the White House itself, along with the President’s Review Board has said that the government can accomplish its goals without bulk telephone records collection. And the Privacy and Civil Liberties Oversight Board said, “We have not identified a single instance involving a threat to the United States in which [bulk collection under Section 215 of the PATRIOT Act] made a concrete difference in the outcome of a counterterrorism investigation.” Since June of 2013, we’ve continued to learn more about how out of control the NSA is. But what has not happened since June is legislative reform of the NSA. There have been myriad bipartisan proposals in Congress—some authentic and some not—but lawmakers didn’t pass anything. We need comprehensive reform that addresses all the ways the NSA has overstepped its authority and provides the NSA with appropriate and constitutional tools to keep America safe. In the meantime, tell Congress to take a stand. A vote against reauthorization of Section 215 is a vote for the Constitution.
  •  
    EFF has launched an email campaign to press members of Congress not to renew section 215 of the Patriot Act when it expires on June 1, 2015.   Section 215 authorizes FBI officials to "make an application for an order requiring the production of *any tangible things* (including books, records, papers, documents, and other items) for an investigation to obtain foreign intelligence information not concerning a United States person or to protect against international terrorism or clandestine intelligence activities, provided that such investigation of a United States person is not conducted solely upon the basis of activities protected by the first amendment to the Constitution." http://www.law.cornell.edu/uscode/text/50/1861 The section has been abused to obtain bulk collection of all telephone records for the NSA's storage and processing. But the section goes farther and lists as specific examples of records that can be obtained under section 215's authority, "library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records."  Think of the NSA's voracious appetite for new "haystacks" it can store and search in its gigantic new data center in Utah. Then ask yourself, "do I want the NSA to obtain all of my personal data, store it, and search it at will?" If your answer is "no," you might consider visiting this page to send your Congress critters an email urging them to vote against renewal of section 215 and to vote for other NSA reforms listed in the EFF sample email text. Please do not procrastinate. Do it now, before you forget. Every voice counts.
Gonzalo San Gil, PhD.

#KeepItOn - Access Now - 0 views

  •  
    "This December, world leaders will meet in Mexico to discuss the future of the internet. We're going to be there calling on them to put an end to internet shutdowns."
Paul Merrell

Profiled From Radio to Porn, British Spies Track Web Users' Online Identities | Global ... - 0 views

  • One system builds profiles showing people’s web browsing histories. Another analyzes instant messenger communications, emails, Skype calls, text messages, cell phone locations, and social media interactions. Separate programs were built to keep tabs on “suspicious” Google searches and usage of Google Maps. The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens, all without a court order or judicial warrant.
  • The power of KARMA POLICE was illustrated in 2009, when GCHQ launched a top-secret operation to collect intelligence about people using the Internet to listen to radio shows. The agency used a sample of nearly 7 million metadata records, gathered over a period of three months, to observe the listening habits of more than 200,000 people across 185 countries, including the U.S., the U.K., Ireland, Canada, Mexico, Spain, the Netherlands, France, and Germany.
  • GCHQ’s documents indicate that the plans for KARMA POLICE were drawn up between 2007 and 2008. The system was designed to provide the agency with “either (a) a web browsing profile for every visible user on the Internet, or (b) a user profile for every visible website on the Internet.” The origin of the surveillance system’s name is not discussed in the documents. But KARMA POLICE is also the name of a popular song released in 1997 by the Grammy Award-winning British band Radiohead, suggesting the spies may have been fans. A verse repeated throughout the hit song includes the lyric, “This is what you’ll get, when you mess with us.”
  • GCHQ vacuums up the website browsing histories using “probes” that tap into the international fiber-optic cables that transport Internet traffic across the world. A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” (a term the agency uses to refer to metadata records), with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held (41 percent) was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.” There was a simple aim at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs.
  • The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
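The annotations above distinguish “events” (metadata records) from the content of communications themselves. To make that distinction concrete, here is a minimal, purely illustrative Python sketch of what a single browsing-history metadata record might contain; the field names and values are assumptions chosen for the example, not a schema taken from the leaked documents.

    # A purely illustrative sketch of a browsing-history metadata "event".
    # The field names are assumptions made for this example, not a schema
    # taken from the leaked documents.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class BrowsingEvent:
        timestamp: datetime   # when the request was observed on the cable
        client_ip: str        # network address the request came from
        server_host: str      # which site was visited
        url_path: str         # which page on that site
        user_agent: str       # browser/device hint sent with the request

    event = BrowsingEvent(
        timestamp=datetime(2009, 3, 1, 12, 30, tzinfo=timezone.utc),
        client_ip="203.0.113.42",   # address from the documentation range
        server_host="example.org",
        url_path="/news/today",
        user_agent="Mozilla/5.0",
    )

    # Note what is absent: the page content itself. Metadata records the fact
    # and context of a visit, which, aggregated at population scale, is enough
    # to assemble the kind of browsing profiles the program aimed for.
    print(asdict(event))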
Gary Edwards

Two Microsofts: Mulling an alternate reality | ZDNet - 1 views

  • Judge Jackson had it right. And the Court of Appeals? Not so much
  • Judge Jackson is an American hero, and news of his passing thumped me hard. His ruling against Microsoft and the subsequent overturn of that ruling resulted, IMHO, in two extraordinary directions that changed the world. Sure, the what-if game is interesting, but the reality itself is stunning enough. Of course, Judge Jackson sought to break the monopoly. The US Court of Appeals overturn resulted in the monopoly remaining intact, but the Internet remaining free and open. Judge Jackson’s breakup plan had a good shot at achieving both a breakup of the monopoly and a free and open Internet. I admit, though, that at the time I did not favor the Judge’s plan. And I actually did submit a proposal based on Microsoft having to both support the WiNE project and provide a complete port to WiNE to any software provider requesting one. I wanted to break the monopolist’s hold on the Windows Productivity Environment, and on the hundreds of millions of dollars and years of development effort forever trapped on that platform. For me, it was the productivity platform that had to be broken.
  • I assume the good Judge thought that separating the Windows OS from Microsoft Office / Applications would force the OS to open up the secret APIs even as the OS continued to evolve. Maybe. But a full disclosure of the APIs coupled with the community-service “port to WiNE” requirement might have sped up the process. Incredibly, the “Undocumented Windows Secrets” industry continues to thrive, and the legendary Andrew Schulman’s number is still at the top of the Silicon Valley legal profession’s speed dials. http://goo.gl/0UGe8 Oh well. The Court of Appeals stopped the breakup, leaving the Windows Productivity Platform intact. Microsoft continues to own the “client” in “Client/Server” computing. Although Microsoft was temporarily stopped from leveraging its desktop monopoly into iron-fisted control and dominance of the Internet, I think what we’re watching today with the Cloud is Judge Jackson’s worst nightmare. And mine too. A great transition is now underway, as businesses and enterprises begin the move from legacy client/server business systems and processes to a newly emerging Cloud Productivity Platform. In this great transition, Microsoft holds an inside straight. They have all the aces because they own the legacy desktop productivity platform and can control the transition to the Cloud. No doubt this transition is going to happen. And it will severely disrupt and change Microsoft’s profit formula. But if the Redmond reprobate can provide a “value added” transition of legacy business systems and processes, and direct these new systems to the Microsoft Cloud, the profits will be immense.
  • Judge Jackson sought to break Microsoft’s ability to “leverage” its existing monopoly into the Internet; his plan was overturned and replaced by one based on judicial oversight. Microsoft got a slap on the wrist from the Court of Appeals, but was hammered with lawsuits from the hundreds of parties injured by its rampant criminality. Some put the price of that criminality as high as $14 billion in settlements. Plus, the shareholders forced Chairman Bill to resign. At the end of the day though, Chairman Bill was right. Keeping the monopoly intact was worth whatever penalty Microsoft was forced to pay. He knew that even the judicial oversight would end one day. Which it did. And now his company is ready to go for it all by leveraging and controlling the great productivity transition. No business wants to be hostage to a cold-hearted monopolist. But there is a huge difference between a non-disruptive, cost-effective, process-by-process, value-added transition to a Cloud Productivity Platform and the very disruptive and costly “rip-out-and-replace” transition offered by Google, ZOHO, Box, SalesForce and other Cloud Productivity contenders. Microsoft, and only Microsoft, can offer the value-added transition path. If they get the Cloud even halfway right, they will own business productivity far into the future. Rest in Peace, Judge Jackson. Your efforts were heroic and will be remembered as such. ~ge~
  •  
    Comments on the latest SVN article mulling the effects of Judge Thomas Penfield Jackson's antitrust ruling and proposed breakup of Microsoft. comment: "Chinese Wall" Ummm, there was a Chinese Wall between the Microsoft OS and the MS Applications layer. At least that's what Chairman Bill promised developers at a 1990 OS/2-Windows Conference I attended. It was a developers' luncheon, hosted by Microsoft, with Chairman Bill speaking to about 40 developers with applications designed to run on the then soon-to-be-released Windows 3.0. In his remarks, the Chairman described his vision of commoditizing the personal computer market through an open hardware-reference platform on one side of the Windows OS, and provisioning an open application-developer layer on the other using open and totally transparent APIs. Of course the question came up concerning the obvious advantage Microsoft applications would have. Chairman Bill answered the question by describing the Chinese Wall that existed between Microsoft's OS and Apps development departments. He promised that OS APIs would be developed privately, separate from the Apps department, and publicly disclosed to ALL developers at the same time. Oh yeah, there was lots of anti-IBM "evil empire" stuff too :) Of course we now know this was a line of crap. Microsoft Apps was discovered to have been using undocumented and secret Windows APIs. http://goo.gl/0UGe8. Microsoft Apps had a distinct advantage over the competition, and eventually the entire Windows Productivity Platform became dependent on the MSOffice core. The company I worked for back then, Pyramid Data, had the first contact-management application for Windows: PowerLeads. Every Friday night we would release bug fixes and improvements using Wildcat BBS. By Monday morning we would be slammed with calls from users complaining that they had downloaded the Friday night patch, and now some other application would not load or function properly. Eventually we tracked th
Paul Merrell

NSA Spying Relies on AT&T's 'Extreme Willingness to Help' - ProPublica - 0 views

  • The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T. While it has been long known that American telecommunications companies worked closely with the spy agency, newly disclosed NSA documents show that the relationship with AT&T has been considered unique and especially productive. One document described it as “highly collaborative,” while another lauded the company’s “extreme willingness to help.”
  • AT&T’s cooperation has involved a broad range of classified activities, according to the documents, which date from 2003 to 2013. AT&T has given the NSA access, through several methods covered under different legal rules, to billions of emails as they have flowed across its domestic networks. It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T. The NSA’s top-secret budget in 2013 for the AT&T partnership was more than twice that of the next-largest such program, according to the documents. The company installed surveillance equipment in at least 17 of its Internet hubs on American soil, far more than its similarly sized competitor, Verizon. And its engineers were the first to try out new surveillance technologies invented by the eavesdropping agency. One document reminds NSA officials to be polite when visiting AT&T facilities, noting: “This is a partnership, not a contractual relationship.” The documents, provided by the former agency contractor Edward Snowden, were jointly reviewed by The New York Times and ProPublica.
  • It is not clear if the programs still operate in the same way today. Since the Snowden revelations set off a global debate over surveillance two years ago, some Silicon Valley technology companies have expressed anger at what they characterize as NSA intrusions and have rolled out new encryption to thwart them. The telecommunications companies have been quieter, though Verizon unsuccessfully challenged a court order for bulk phone records in 2014. At the same time, the government has been fighting in court to keep the identities of its telecom partners hidden. In a recent case, a group of AT&T customers claimed that the NSA’s tapping of the Internet violated the Fourth Amendment protection against unreasonable searches. This year, a federal judge dismissed key portions of the lawsuit after the Obama administration argued that public discussion of its telecom surveillance efforts would reveal state secrets, damaging national security.
Gary Edwards

Siding with HTML over XHTML, My Decision to Switch - Monday By Noon - 0 views

  • Publishing content on the Web is in no way limited to professional developers or designers; much of the reason the net is so active is because anyone can make a website. Sure, we (as knowledgeable professionals or hobbyists) all hope to make the Web a better place by doing our part in publishing documents with semantically rich, valid markup, but the reality is that those documents are rare. It’s important to keep in mind the true nature of the Internet: an open platform for information sharing.
  • XHTML2 has some very good ideas that I hope can become part of the web. However, it’s unrealistic to think that all web authors will switch to an XML-based syntax which demands that browsers stop processing the document on the first error. XML’s draconian policy was an attempt to clean up the web. This was done around 1996 when lots of invalid content entered the web. CSS took a different approach: instead of demanding that content isn’t processed, we defined rules for how to handle the undefined. It’s called “forward-compatible parsing” and means we can add new constructs without breaking the old. So, I don’t think XHTML is a realistic option for the masses. HTML 5 is it.
    • Gary Edwards
       
      Great quote from CSS expert Hakon Wium Lie. For a concrete illustration of the strict-versus-forgiving parsing difference he describes, see the short sketch after this list.
  • @marbux: Of course I disagree with your interop assessment, but I wondered how it is that you’re missing the point. I think you confuse web applications with the legacy desktop client/server application model. And that confusion leads to the mistake of trying to transfer the desktop document model to one that must adequately service advancing web applications.
  •  
    A CMS expert argues for HTML over XHTML, explaining his reasons for switching. Excellent read! He nails the basics. For similar reasons, we moved from ODF to ePUB, then to CDF, and finally to the advanced WebKit document model, where wikiWORD will make its stand.
  •  
    See also my comment on the same web page that explains why HTML 5 is NOT it for document exchange between web editing applications.
  •  
    Response to marbux supporting the WebKit layout/document model. Marbux argues that HTML5 is not interoperable, that CSS2 is near useless, and that HTML5 fails to provide the interop web applications need. I respond by arguing that the only way to look at web applications is to consider that the browser layout engine is the web application layout engine! Web applications are actually written to the browser layout/document model, OR, to take advantage of browser plug-in capabilities. The interoperability marbux seeks is tied directly to the browser layout engine. In this context, the web format is simply a reflection of that layout engine. If there’s an interop problem, it comes from differences between browser layout engines. The good news is that there are all kinds of efforts to close the browser gap, including WHATWG HTML5, CSS3, the W3C DOM, JavaScript libraries, Google GWT (Java to JavaScript), Yahoo GUI, and my favorite, WebKit. The bad news is that the clock is ticking. Microsoft has pulled the trigger, and the great migration of MSOffice client/server systems to the MS WebStack-Mesh architecture has begun. Key to this transition are the WPF-.NET proprietary formats, protocols, and interfaces such as XAML, Silverlight, LINQ, and Smart Tags. New business processes are being written, and old legacy desktop-bound processes are being transitioned to this emerging platform. The fight for the Open Web is on, with Microsoft threatening to transition their entire business desktop monopoly to a Web platform they own. ~ge~
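As noted above, here is a minimal sketch of the parsing difference at the heart of the HTML-versus-XHTML argument, using only the Python standard library. The markup snippet and handler below are illustrative assumptions, not code from the cited article or comments: a strict XML parser must halt at the first well-formedness error, while an HTML parser recovers and keeps processing, the "forward-compatible" behavior described in the quoted passage.

    # Minimal sketch contrasting XML's "draconian" error handling with HTML's
    # forgiving, forward-compatible parsing. Standard library only.
    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    # Malformed markup: the <p> element is never closed.
    snippet = "<div><p>Hello, world</div>"

    # Strict XML parsing: the first well-formedness error aborts processing,
    # which is what XHTML served as application/xhtml+xml demands of browsers.
    try:
        ET.fromstring(snippet)
    except ET.ParseError as err:
        print("XML parser stopped:", err)

    # Tolerant HTML parsing: the parser reports what it can and keeps going,
    # mirroring how browsers recover from "tag soup" on the real web.
    class TagLogger(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("HTML parser saw start tag:", tag)

    TagLogger().feed(snippet)

That recovery behavior is the practical reason imperfect markup still renders in browsers, and it is the core of the case made above for HTML over XHTML on the open web.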
Paul Merrell

The antitrust thing that won't blow over | Here we go again | The Economist - 0 views

  • Google, the industry’s newest giant, is also coming under closer scrutiny. On April 29th it emerged that America’s Justice Department is examining whether Google’s settlement with authors and publishers over its book-search service violates antitrust laws; and on May 5th the Federal Trade Commission (FTC) launched a probe to see whether Google’s sharing of two board members with Apple reduces competition between the two firms.
  • Similarly, antitrust lobbying is part of a broader “platform war” for IBM, which hopes thereby to keep Microsoft at bay. Among other things, IBM is a sponsor of the European Committee for Interoperable Systems (ECIS), which has many of Microsoft’s other competitors as its members and is one of the prime movers behind the new browser case. It started in late 2007 with a complaint by Opera, a Norwegian browser-maker and ECIS member. Not to be outdone, Microsoft has entered the antitrust game, too. It recently made an investment in T3, a small vendor of mainframe-like computers, which in January lodged a complaint with the European Commission, alleging that IBM kept it from competing by refusing to license mainframe software to T3’s customers. Microsoft has also lobbied American antitrust regulators to tackle Google, encouraging them to look into an online-advertising deal between the search giant and its rival, Yahoo!, which was eventually abandoned.
  • IBM, for its part, would appear to have little to fear. It is hard to argue, with so many different computer systems around, that mainframes still constitute a separate market—a necessary condition if IBM’s behaviour is to be judged anticompetitive.